Fix operator upgrades #587
Conversation
I have questions... I get that this works, but isn't version info necessary? What happens when there are multiple versions in the cluster? Wouldn't we want the version on all things (except StatefulSets)? Could we use annotations for the version?
Thinking more about this... I think we need a different solution. The version label seems meaningful. There are objects which cannot be updated... those objects (StatefulSets, Jobs) should never be updated: the Job was a job for a specific version, and the StatefulSet will be good only for one version. If an upgrade happens, a backup is invoked, the StatefulSet is destroyed and replaced (with a new version), and a restore happens.
@kensipe why would you reimplement rolling updates on StatefulSets if you can reuse what is inside Kubernetes? K8s allows that, and as long as you don't touch persistent volumes, you can update anything you want... Sure, if you need to change something around persistence, you need to do what you're proposing. If you touch a PV, Kubernetes won't allow that update... so it's "safe". Yeah, we should probably think about how to document/ease, for operator developers, situations where they need to touch a PV and create that extra upgrade plan, but I think that is a different issue than this one.
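To illustrate the point about which fields are safe to update in place, here is a minimal sketch (hypothetical names; not from this PR) of a StatefulSet, annotated with which parts the API server allows to change after creation:

```yaml
# Hypothetical StatefulSet illustrating mutable vs. immutable fields.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-operator        # immutable: object identity
spec:
  replicas: 2                   # mutable: can be patched freely
  selector:
    matchLabels:
      app: example-operator     # IMMUTABLE after creation; adding a
                                # version label here is rejected
  template:                     # mutable: edits trigger a rolling update
    metadata:
      labels:
        app: example-operator   # must stay a superset of the selector
    spec:
      containers:
        - name: app
          image: example/app:0.2.0   # mutable: normal upgrade path
  volumeClaimTemplates:         # effectively immutable: changes here
    - metadata:                 # require destroy/backup/restore, as
        name: data              # discussed above
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

This is why a rolling update of the pod template "just works", while anything that would change `spec.selector` or the claim templates does not.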
@alenkacz I'm not sure where you are coming from... I never suggested not using k8s functionality; of course we should. I am suggesting that there are protected k8s objects which KUDO should be aware of and treat appropriately. I don't like the model of "attempt it; if it works, good, and if it fails, it must not have been allowed, so we are good." I definitely don't like the model of not supporting multiple versions of an operator, and I would like to have solutions around that in the long term.
Put another way... I would prefer NOT to remove versions from labels, and for most objects this makes sense. For objects where this isn't allowed, we should have other mechanisms for management. I'm not convinced this PR makes sense.
To be clear... I am FOR not having a version label on volumeClaims, but this code change isn't just for volumeClaims... it is for "CommonLabels", and I am NOT for removing version labels on the other k8s objects.
@kensipe I would like us to switch the way of reasoning here: instead of telling me what you want to change and how, I need to see what you want to achieve with it, because that is missing from the conversation. Tell me where you need the version inside labels and why. Where would not having that label actually cause a real problem, e.g. a selector picking up the wrong resources?
This is nothing new, and nothing that would change with this PR. You CANNOT have multiple instances with the same name inside the same namespace. That is just how k8s works. You cannot have multiple apps of the same name inside Marathon either. And when you use a different name to install a different version of an operator, you don't need the version label, because filtering on name is enough. This PR does not affect our support for multiple versions in ANY WAY.
I don't agree with that. There are things you can safely update, and when that's the case we should do it the normal k8s way, without the overhead of a backup and restore. I am going to spend some time today going through the k8s docs, making sure how selectors work and where we can actually omit the labels, to gather the facts for what I proposed in the first paragraph.
btw. there is no easy way with kustomize to skip adding labels to some parts of the YAML, although it's a pretty requested feature :( kubernetes-sigs/kustomize#817 We could write a plugin, but that seems like a pretty big effort...
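For context on why kustomize makes this hard: `commonLabels` is applied to every resource, and it is also propagated into selector fields, which is exactly what breaks immutable selectors. A minimal sketch (hypothetical file names and versions, not from this repo):

```yaml
# kustomization.yaml (hypothetical example)
# commonLabels are added to metadata.labels of every resource AND
# propagated into selectors (e.g. StatefulSet spec.selector.matchLabels).
# A version value here therefore ends up in immutable selector fields.
commonLabels:
  app: example-operator
  version: "0.2.0"    # problematic: bakes the version into selectors
resources:
  - statefulset.yaml
  - service.yaml
```

There is no built-in switch to say "apply these labels everywhere except selectors" or "except this one resource", which is what kubernetes-sigs/kustomize#817 asks for.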
After doing a lot of reading, my summary is:
I'll be working on #601 in the meantime.
What type of PR is this?
/kind bug
What this PR does / why we need it:
The controller was adding version labels with every patch of an object. This is fine for all objects except StatefulSets.
That was causing the same error helm was dealing with here: helm/charts#7803
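To make the failure mode concrete: once a StatefulSet exists, a patch that changes its `spec.selector` (which happens when a version label is injected into the common labels) is rejected by the API server. A sketch of the kind of diff that fails (hypothetical names and versions):

```yaml
# Existing StatefulSet selector at install time:
spec:
  selector:
    matchLabels:
      app: example-operator
      version: "0.1.0"

# Selector the controller tries to patch in on upgrade:
spec:
  selector:
    matchLabels:
      app: example-operator
      version: "0.2.0"   # changed -> rejected, spec.selector is immutable
```

The API server refuses such a patch with a "Forbidden"/"field is immutable" style error, because only fields like `replicas`, `template`, and `updateStrategy` may be updated on a StatefulSet. Dropping the version label from the patched common labels avoids ever touching the selector.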
Which issue(s) this PR fixes:
Fixes #557
Special notes for your reviewer:
Does this PR introduce a user-facing change?: